Differentially Private EBMs#

See the reference paper [1] for full details.

Code Example#

The following code trains a DP-EBM classifier on the adult income dataset and then produces both global and local explanations.

from interpret import set_visualize_provider
from interpret.provider import InlineProvider
set_visualize_provider(InlineProvider())
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

from interpret.privacy import DPExplainableBoostingClassifier
from interpret import show

df = pd.read_csv(
    "https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data",
    header=None)
df.columns = [
    "Age", "WorkClass", "fnlwgt", "Education", "EducationNum",
    "MaritalStatus", "Occupation", "Relationship", "Race", "Gender",
    "CapitalGain", "CapitalLoss", "HoursPerWeek", "NativeCountry", "Income"
]
X = df.iloc[:, :-1]
y = df.iloc[:, -1]

feature_types = ['continuous', 'nominal', 'continuous', 'nominal',
    'continuous', 'nominal', 'nominal', 'nominal', 'nominal', 'nominal',
    'continuous', 'continuous', 'continuous', 'nominal']

privacy_bounds = {"Age": (17, 90), "fnlwgt": (12285, 1484705), 
    "EducationNum": (1, 16), "CapitalGain": (0, 99999), 
    "CapitalLoss": (0, 4356), "HoursPerWeek": (1, 99)
}

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20)

dpebm = DPExplainableBoostingClassifier(random_state=None, epsilon=1.0, delta=1e-5, 
    feature_types=feature_types, privacy_bounds=privacy_bounds)
dpebm.fit(X_train, y_train)

auc = roc_auc_score(y_test, dpebm.predict_proba(X_test)[:, 1])
print("AUC: {:.3f}".format(auc))
AUC: 0.881
show(dpebm.explain_global())




show(dpebm.explain_local(X_test[:5], y_test[:5]), 0)




Bibliography#

[1] Harsha Nori, Rich Caruana, Zhiqi Bu, Judy Hanwen Shen, and Janardhan Kulkarni. Accuracy, Interpretability, and Differential Privacy via Explainable Boosting. In Proceedings of the 38th International Conference on Machine Learning, 8227-8237. 2021.

API#

DPExplainableBoostingClassifier#

class interpret.privacy.DPExplainableBoostingClassifier(feature_names=None, feature_types=None, max_bins=32, exclude=[], validation_size=0, outer_bags=1, learning_rate=0.01, max_rounds=300, max_leaves=3, objective='log_loss', n_jobs=-2, random_state=None, epsilon=1.0, delta=1e-05, composition='gdp', bin_budget_frac=0.1, privacy_bounds=None)#

Differentially Private Explainable Boosting Classifier. Note that many arguments are defaulted differently than regular EBMs.

Parameters:
  • feature_names (list of str, default=None) – List of feature names.

  • feature_types (list of FeatureType, default=None) –

    List of feature types. For DP-EBMs, feature_types should be fully specified. The auto-detector, if used, examines the data and is not included in the privacy budget. If auto-detection is used, a privacy warning will be issued. FeatureType can be:

    • None: Auto-detect (privacy budget is not respected!).

    • ’continuous’: Use private continuous binning.

    • [List of str]: Ordinal categorical where the order has meaning. Eg: [“low”, “medium”, “high”]. Uses private categorical binning.

    • ’ordinal’: Ordinal categorical where the order is determined by sorting the feature strings. Uses private categorical binning.

    • ’nominal’: Categorical where the order has no meaning. Eg: country names. Uses private categorical binning.

  • max_bins (int, default=32) – Max number of bins per feature.

  • exclude (list of tuples of feature indices|names, default=[]) – Features to be excluded.

  • validation_size (int or float, default=0) –

    Validation set size. A validation set is needed if outer bags or error bars are desired.

    • Integer (1 <= validation_size): Count of samples to put in the validation sets

    • Percentage (validation_size < 1.0): Percentage of the data to put in the validation sets

    • 0: Outer bags have no utility and error bounds will be eliminated

  • outer_bags (int, default=1) – Number of outer bags. Outer bags are used to generate error bounds and help with smoothing the graphs.

  • learning_rate (float, default=0.01) – Learning rate for boosting.

  • max_rounds (int, default=300) – Total number of boosting rounds with n_terms boosting steps per round.

  • max_leaves (int, default=3) – Maximum number of leaves allowed in each tree.

  • objective (str, default="log_loss") – The objective to optimize.

  • n_jobs (int, default=-2) – Number of jobs to run in parallel. Negative integers are interpreted as following joblib’s formula (n_cpus + 1 + n_jobs), just like scikit-learn. Eg: -2 means using all threads except 1.

  • random_state (int or None, default=None) – Random state. None uses device_random and generates non-repeatable sequences. Should be set to ‘None’ for privacy, but can be set to an integer for testing and repeatability.

  • epsilon (float, default=1.0) – Total privacy budget to be spent.

  • delta (float, default=1e-5) – Additive component of differential privacy guarantee. Should be smaller than 1/n_training_samples.

  • composition ({'gdp', 'classic'}, default='gdp') – Method of tracking noise aggregation.

  • bin_budget_frac (float, default=0.1) – Percentage of total epsilon budget to use for private binning.

  • privacy_bounds (Union[np.ndarray, Mapping[Union[int, str], Tuple[float, float]]], default=None) – Specifies known min/max values for each feature. If None, DP-EBM shows a warning and uses the data to determine these values.
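To make the privacy-related arguments concrete, here is a small sketch of how they relate to one another for an assumed dataset size. The dataset size below is an assumption (roughly 80% of the 32,561 adult rows), and the budget split is only the intuitive reading of bin_budget_frac; the library's actual accounting under 'gdp' or 'classic' composition is more involved.

```python
# Illustrative sketch, not the library's internal accounting.
n_training_samples = 26_048  # assumption: ~80% of the 32,561 adult rows
epsilon = 1.0                # total privacy budget
delta = 1e-5                 # should be smaller than 1 / n_training_samples
bin_budget_frac = 0.1

# delta sanity check suggested by the parameter description above
assert delta < 1.0 / n_training_samples

# bin_budget_frac splits the epsilon budget between private binning
# and the remainder used for private boosting
binning_budget = epsilon * bin_budget_frac   # 0.1
boosting_budget = epsilon - binning_budget   # 0.9
print(binning_budget, boosting_budget)
```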

Variables:
  • classes_ (array of bool, int, or unicode with shape (2,)) – The class labels. DPExplainableBoostingClassifier only supports binary classification, so there are 2 classes.

  • n_features_in_ (int) – Number of features.

  • feature_names_in_ (List of str) – Resolved feature names. Names can come from feature_names, X, or be auto-generated.

  • feature_types_in_ (List of str) – Resolved feature types. Can be: ‘continuous’, ‘nominal’, or ‘ordinal’.

  • bins_ (List[Union[List[Dict[str, int]], List[array of float with shape (n_cuts,)]]]) – Per-feature list that defines how to bin each feature. Each feature in the list contains a list of binning resolutions. The first item in the binning resolution list is for binning main effect features. If there are more items in the binning resolution list, they define the binning for successive levels of resolutions. The item at index 1, if it exists, defines the binning for pairs. The last binning resolution defines the bins for all successive interaction levels. If the binning resolution list contains dictionaries, then the feature is either a ‘nominal’ or ‘ordinal’ categorical. If the binning resolution list contains arrays, then the feature is ‘continuous’ and the arrays will contain float cut points that separate continuous values into bins.

  • feature_bounds_ (array of float with shape (n_features, 2)) – min/max bounds for each feature. feature_bounds_[feature_index, 0] is the min value of the feature and feature_bounds_[feature_index, 1] is the max value of the feature. Categoricals have min & max values of NaN.

  • term_features_ (List of tuples of feature indices) – Additive terms used in the model and their component feature indices.

  • term_names_ (List of str) – List of term names.

  • bin_weights_ (List of array of float with shape (n_bins)) – Per-term list of the total sample weights in each term’s bins.

  • bagged_scores_ (List of array of float with shape (n_outer_bags, n_bins)) – Per-term list of the bagged model scores.

  • term_scores_ (List of array of float with shape (n_bins)) – Per-term list of the model scores.

  • standard_deviations_ (List of array of float with shape (n_bins)) – Per-term list of the standard deviations of the bagged model scores.

  • bag_weights_ (array of float with shape (n_outer_bags,)) – Per-bag record of the total weight within each bag.

  • breakpoint_iteration_ (array of int with shape (n_stages, n_outer_bags)) – The number of boosting rounds performed within each stage. Normally, the count of main effects boosting rounds will be in breakpoint_iteration_[0].

  • intercept_ (array of float with shape (1,)) – Intercept of the model.

  • noise_scale_binning_ (float) – The noise scale during binning.

  • noise_scale_boosting_ (float) – The noise scale during boosting.
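To make the bins_ layout concrete, the following sketch shows how float cut points separate continuous values into bins. The cut points and the half-open bin-edge convention here are illustrative assumptions, not a readout of a fitted model; a fitted model stores its own privately chosen cuts in bins_[feature_index][0].

```python
import numpy as np

# Hypothetical cut points for a continuous feature such as Age.
cuts = np.array([25.0, 40.0, 60.0])

ages = np.array([18, 30, 40, 75])
# np.digitize maps each value to the bin its cut points imply:
# (-inf, 25) -> 0, [25, 40) -> 1, [40, 60) -> 2, [60, inf) -> 3
bin_indices = np.digitize(ages, cuts)
print(bin_indices)  # [0 1 2 3]
```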

decision_function(X, init_score=None)#

Predict scores from model before calling the link function.

Parameters:
  • X – Numpy array for samples.

  • init_score – Optional. Either a model that can generate scores or per-sample initialization scores. If per-sample scores are provided, they should be the same length as X.

Returns:

The sum of the additive term contributions.
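For a classifier trained with the default "log_loss" objective, the link is the logit, so the probabilities reported by predict_proba should correspond to the logistic of the summed scores returned by decision_function. The sketch below only demonstrates that logistic relationship on synthetic scores; it is not the library's code, and with a fitted model you could check the correspondence yourself against predict_proba.

```python
import math

def logistic(score):
    """Inverse of the logit link: maps an additive score to a probability."""
    return 1.0 / (1.0 + math.exp(-score))

# Synthetic additive scores standing in for decision_function(X) output.
scores = [-2.0, 0.0, 2.0]
probs = [logistic(s) for s in scores]
print([round(p, 3) for p in probs])  # [0.119, 0.5, 0.881]
```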

explain_global(name=None)#

Provides global explanation for model.

Parameters:

name – User-defined explanation name.

Returns:

An explanation object, visualizing feature-value pairs as horizontal bar chart.

explain_local(X, y=None, name=None, init_score=None)#

Provides local explanations for provided samples.

Parameters:
  • X – Numpy array for X to explain.

  • y – Numpy vector for y to explain.

  • name – User-defined explanation name.

  • init_score – Optional. Either a model that can generate scores or per-sample initialization scores. If per-sample scores are provided, they should be the same length as X.

Returns:

An explanation object, visualizing feature-value pairs for each sample as horizontal bar charts.

fit(X, y, sample_weight=None, init_score=None)#

Fits model to provided samples.

Parameters:
  • X – Numpy array for training samples.

  • y – Numpy array as training labels.

  • sample_weight – Optional array of weights per sample. Should be same length as X and y.

  • init_score – Optional. Either a model that can generate scores or per-sample initialization scores. If per-sample scores are provided, they should be the same length as X.

Returns:

Itself.

monotonize(term, increasing='auto')#

Adjusts a term to be monotone using isotonic regression.

Parameters:
  • term – Index or name of the continuous univariate term to which monotone constraints are applied.

  • increasing – ‘auto’ or bool. ‘auto’ decides the direction based on a Spearman correlation estimate.

Returns:

Itself.
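As a rough illustration of what isotonic regression does to a term's bin scores, here is the pool-adjacent-violators idea applied to a noisy, roughly increasing curve. This is a minimal unweighted sketch, not interpret's implementation; monotonize operates on the fitted model's term_scores_ internally.

```python
def pool_adjacent_violators(scores):
    """Minimal isotonic (non-decreasing) fit via pool-adjacent-violators.

    Unweighted, for illustration only. Each block holds (sum, count);
    adjacent blocks are merged while their means violate the ordering.
    """
    blocks = []
    for s in scores:
        blocks.append([s, 1])
        while len(blocks) > 1 and blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]:
            total, count = blocks.pop()
            blocks[-1][0] += total
            blocks[-1][1] += count
    out = []
    for total, count in blocks:
        out.extend([total / count] * count)
    return out

# Noisy bin scores, as DP noise might leave them; the result is non-decreasing.
noisy = [0.1, 0.3, 0.2, 0.5, 0.4, 0.9]
print(pool_adjacent_violators(noisy))
```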

predict(X, init_score=None)#

Predicts on provided samples.

Parameters:
  • X – Numpy array for samples.

  • init_score – Optional. Either a model that can generate scores or per-sample initialization scores. If per-sample scores are provided, they should be the same length as X.

Returns:

Predicted class label per sample.

predict_proba(X, init_score=None)#

Probability estimates on provided samples.

Parameters:
  • X – Numpy array for samples.

  • init_score – Optional. Either a model that can generate scores or per-sample initialization scores. If per-sample scores are provided, they should be the same length as X.

Returns:

Probability estimate of sample for each class.

score(X, y, sample_weight=None)#

Return the mean accuracy on the given test data and labels.

In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.

Parameters:
  • X (array-like of shape (n_samples, n_features)) – Test samples.

  • y (array-like of shape (n_samples,) or (n_samples, n_outputs)) – True labels for X.

  • sample_weight (array-like of shape (n_samples,), default=None) – Sample weights.

Returns:

score – Mean accuracy of self.predict(X) w.r.t. y.

Return type:

float

term_importances(importance_type='avg_weight')#

Provides the term importances.

Parameters:

importance_type – The type of term importance requested (‘avg_weight’ or ‘min_max’).

Returns:

An array of term importances, with one importance per additive term.

DPExplainableBoostingRegressor#

class interpret.privacy.DPExplainableBoostingRegressor(feature_names=None, feature_types=None, max_bins=32, exclude=[], validation_size=0, outer_bags=1, learning_rate=0.01, max_rounds=300, max_leaves=3, objective='rmse', n_jobs=-2, random_state=None, epsilon=1.0, delta=1e-05, composition='gdp', bin_budget_frac=0.1, privacy_bounds=None, privacy_target_min=None, privacy_target_max=None)#

Differentially Private Explainable Boosting Regressor. Note that many arguments are defaulted differently than regular EBMs.

Parameters:
  • feature_names (list of str, default=None) – List of feature names.

  • feature_types (list of FeatureType, default=None) –

    List of feature types. For DP-EBMs, feature_types should be fully specified. The auto-detector, if used, examines the data and is not included in the privacy budget. If auto-detection is used, a privacy warning will be issued. FeatureType can be:

    • None: Auto-detect (privacy budget is not respected!).

    • ’continuous’: Use private continuous binning.

    • [List of str]: Ordinal categorical where the order has meaning. Eg: [“low”, “medium”, “high”]. Uses private categorical binning.

    • ’ordinal’: Ordinal categorical where the order is determined by sorting the feature strings. Uses private categorical binning.

    • ’nominal’: Categorical where the order has no meaning. Eg: country names. Uses private categorical binning.

  • max_bins (int, default=32) – Max number of bins per feature.

  • exclude (list of tuples of feature indices|names, default=[]) – Features to be excluded.

  • validation_size (int or float, default=0) –

    Validation set size. A validation set is needed if outer bags or error bars are desired.

    • Integer (1 <= validation_size): Count of samples to put in the validation sets

    • Percentage (validation_size < 1.0): Percentage of the data to put in the validation sets

    • 0: Outer bags have no utility and error bounds will be eliminated

  • outer_bags (int, default=1) – Number of outer bags. Outer bags are used to generate error bounds and help with smoothing the graphs.

  • learning_rate (float, default=0.01) – Learning rate for boosting.

  • max_rounds (int, default=300) – Total number of boosting rounds with n_terms boosting steps per round.

  • max_leaves (int, default=3) – Maximum number of leaves allowed in each tree.

  • objective (str, default="rmse") – The objective to optimize. Options include: “rmse”, “gamma_deviance”, “poisson_deviance:max_delta_step=0.7”, “pseudo_huber:delta=1.0”, “rmse_log” (rmse with a log link function)

  • n_jobs (int, default=-2) – Number of jobs to run in parallel. Negative integers are interpreted as following joblib’s formula (n_cpus + 1 + n_jobs), just like scikit-learn. Eg: -2 means using all threads except 1.

  • random_state (int or None, default=None) – Random state. None uses device_random and generates non-repeatable sequences. Should be set to ‘None’ for privacy, but can be set to an integer for testing and repeatability.

  • epsilon (float, default=1.0) – Total privacy budget to be spent.

  • delta (float, default=1e-5) – Additive component of differential privacy guarantee. Should be smaller than 1/n_training_samples.

  • composition ({'gdp', 'classic'}, default='gdp') – Method of tracking noise aggregation.

  • bin_budget_frac (float, default=0.1) – Percentage of total epsilon budget to use for private binning.

  • privacy_bounds (Union[np.ndarray, Mapping[Union[int, str], Tuple[float, float]]], default=None) – Specifies known min/max values for each feature. If None, DP-EBM shows a warning and uses the data to determine these values.

  • privacy_target_min (float, default=None) – Known target minimum. ‘y’ values will be clipped to this min. If None, DP-EBM shows a warning and uses the data to determine this value.

  • privacy_target_max (float, default=None) – Known target maximum. ‘y’ values will be clipped to this max. If None, DP-EBM shows a warning and uses the data to determine this value.

Variables:
  • n_features_in_ (int) – Number of features.

  • feature_names_in_ (List of str) – Resolved feature names. Names can come from feature_names, X, or be auto-generated.

  • feature_types_in_ (List of str) – Resolved feature types. Can be: ‘continuous’, ‘nominal’, or ‘ordinal’.

  • bins_ (List[Union[List[Dict[str, int]], List[array of float with shape (n_cuts,)]]]) – Per-feature list that defines how to bin each feature. Each feature in the list contains a list of binning resolutions. The first item in the binning resolution list is for binning main effect features. If there are more items in the binning resolution list, they define the binning for successive levels of resolutions. The item at index 1, if it exists, defines the binning for pairs. The last binning resolution defines the bins for all successive interaction levels. If the binning resolution list contains dictionaries, then the feature is either a ‘nominal’ or ‘ordinal’ categorical. If the binning resolution list contains arrays, then the feature is ‘continuous’ and the arrays will contain float cut points that separate continuous values into bins.

  • feature_bounds_ (array of float with shape (n_features, 2)) – min/max bounds for each feature. feature_bounds_[feature_index, 0] is the min value of the feature and feature_bounds_[feature_index, 1] is the max value of the feature. Categoricals have min & max values of NaN.

  • term_features_ (List of tuples of feature indices) – Additive terms used in the model and their component feature indices.

  • term_names_ (List of str) – List of term names.

  • bin_weights_ (List of array of float with shape (n_bins)) – Per-term list of the total sample weights in each term’s bins.

  • bagged_scores_ (List of array of float with shape (n_outer_bags, n_bins)) – Per-term list of the bagged model scores.

  • term_scores_ (List of array of float with shape (n_bins)) – Per-term list of the model scores.

  • standard_deviations_ (List of array of float with shape (n_bins)) – Per-term list of the standard deviations of the bagged model scores.

  • bag_weights_ (array of float with shape (n_outer_bags,)) – Per-bag record of the total weight within each bag.

  • breakpoint_iteration_ (array of int with shape (n_stages, n_outer_bags)) – The number of boosting rounds performed within each stage. Normally, the count of main effects boosting rounds will be in breakpoint_iteration_[0].

  • intercept_ (float) – Intercept of the model.

  • min_target_ (float) – The minimum value found in ‘y’, or privacy_target_min if provided.

  • max_target_ (float) – The maximum value found in ‘y’, or privacy_target_max if provided.

  • noise_scale_binning_ (float) – The noise scale during binning.

  • noise_scale_boosting_ (float) – The noise scale during boosting.

decision_function(X, init_score=None)#

Predict scores from model before calling the link function.

Parameters:
  • X – Numpy array for samples.

  • init_score – Optional. Either a model that can generate scores or per-sample initialization scores. If per-sample scores are provided, they should be the same length as X.

Returns:

The sum of the additive term contributions.

explain_global(name=None)#

Provides global explanation for model.

Parameters:

name – User-defined explanation name.

Returns:

An explanation object, visualizing feature-value pairs as horizontal bar chart.

explain_local(X, y=None, name=None, init_score=None)#

Provides local explanations for provided samples.

Parameters:
  • X – Numpy array for X to explain.

  • y – Numpy vector for y to explain.

  • name – User-defined explanation name.

  • init_score – Optional. Either a model that can generate scores or per-sample initialization scores. If per-sample scores are provided, they should be the same length as X.

Returns:

An explanation object, visualizing feature-value pairs for each sample as horizontal bar charts.

fit(X, y, sample_weight=None, init_score=None)#

Fits model to provided samples.

Parameters:
  • X – Numpy array for training samples.

  • y – Numpy array as training labels.

  • sample_weight – Optional array of weights per sample. Should be same length as X and y.

  • init_score – Optional. Either a model that can generate scores or per-sample initialization scores. If per-sample scores are provided, they should be the same length as X.

Returns:

Itself.

monotonize(term, increasing='auto')#

Adjusts a term to be monotone using isotonic regression.

Parameters:
  • term – Index or name of the continuous univariate term to which monotone constraints are applied.

  • increasing – ‘auto’ or bool. ‘auto’ decides the direction based on a Spearman correlation estimate.

Returns:

Itself.

predict(X, init_score=None)#

Predicts on provided samples.

Parameters:
  • X – Numpy array for samples.

  • init_score – Optional. Either a model that can generate scores or per-sample initialization scores. If per-sample scores are provided, they should be the same length as X.

Returns:

Predicted value per sample.

score(X, y, sample_weight=None)#

Return the coefficient of determination of the prediction.

The coefficient of determination \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an \(R^2\) score of 0.0.

Parameters:
  • X (array-like of shape (n_samples, n_features)) – Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.

  • y (array-like of shape (n_samples,) or (n_samples, n_outputs)) – True values for X.

  • sample_weight (array-like of shape (n_samples,), default=None) – Sample weights.

Returns:

score – \(R^2\) of self.predict(X) w.r.t. y.

Return type:

float

Notes

The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score(). This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
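The \(R^2\) formula above can be computed directly. The sketch below, in plain Python with made-up numbers, stands in for sklearn's r2_score to check the definition, including the property that a constant prediction at the mean of y scores exactly 0.0:

```python
def r2(y_true, y_pred):
    """Coefficient of determination per the formula above: 1 - u/v."""
    mean = sum(y_true) / len(y_true)
    u = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))  # residual sum of squares
    v = sum((t - mean) ** 2 for t in y_true)               # total sum of squares
    return 1.0 - u / v

y_true = [3.0, 5.0, 7.0, 9.0]
y_pred = [2.5, 5.0, 7.5, 9.0]
print(r2(y_true, y_pred))          # close to 1.0 for a good fit

# A constant prediction at the mean of y scores exactly 0.0:
print(r2(y_true, [6.0] * 4))  # 0.0
```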

term_importances(importance_type='avg_weight')#

Provides the term importances.

Parameters:

importance_type – The type of term importance requested (‘avg_weight’ or ‘min_max’).

Returns:

An array of term importances, with one importance per additive term.